
    Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-based algorithms

    We propose a novel approach for hyperspectral super-resolution based on low-rank tensor approximation for a coupled low-rank multilinear (Tucker) model. We show that correct recovery holds for a wide range of multilinear ranks. For the coupled tensor approximation, we propose two SVD-based algorithms that are simple and fast, yet perform comparably to state-of-the-art methods. The approach is applicable to the case of unknown spatial degradation and to the pansharpening problem. Comment: IEEE Transactions on Signal Processing, Institute of Electrical and Electronics Engineers, in press.
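
    The fusion idea lends itself to a compact SVD-based sketch. Below is a minimal, illustrative numpy version assuming a known spectral response P3 of the multispectral sensor; the names (coupled_tucker_fusion, unfold, mode_prod) and the exact update order are assumptions for illustration, not the paper's algorithm, and the paper's recoverability analysis is not reproduced.

        import numpy as np

        def unfold(T, mode):
            # Mode-n matricization of a 3-way tensor.
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def mode_prod(T, M, mode):
            # Mode-n product T x_n M.
            return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

        def coupled_tucker_fusion(Y_h, Y_m, P3, ranks):
            # Y_h: (I1_l, I2_l, I3) hyperspectral image (spatially degraded).
            # Y_m: (I1, I2, I3_l) multispectral image; P3: (I3_l, I3) assumed
            # known spectral response. ranks = (R1, R2, R3) multilinear ranks.
            R1, R2, R3 = ranks
            # Spatial bases from the high-spatial-resolution image, spectral
            # basis from the high-spectral-resolution image (truncated SVDs).
            U1 = np.linalg.svd(unfold(Y_m, 0), full_matrices=False)[0][:, :R1]
            U2 = np.linalg.svd(unfold(Y_m, 1), full_matrices=False)[0][:, :R2]
            U3 = np.linalg.svd(unfold(Y_h, 2), full_matrices=False)[0][:, :R3]
            # Core by projection; the pinv undoes the spectral degradation.
            G = mode_prod(mode_prod(Y_m, U1.T, 0), U2.T, 1)
            G = mode_prod(G, np.linalg.pinv(P3 @ U3), 2)
            # Super-resolved estimate: G x1 U1 x2 U2 x3 U3.
            return mode_prod(mode_prod(mode_prod(G, U1, 0), U2, 1), U3, 2)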

    On-line blind unmixing for hyperspectral pushbroom imaging systems

    In this paper, on-line blind unmixing of hyperspectral images is addressed. Inspired by the Incremental Non-negative Matrix Factorization (INMF) method, we propose an on-line NMF adapted to the acquisition scheme of a pushbroom imager. Because of the non-uniqueness of the NMF model, a minimum-volume constraint on the endmembers is added, which reduces the set of admissible solutions. This results in a stable algorithm that yields results similar to those of standard off-line NMF methods while drastically reducing the computation time. The algorithm is applied to hyperspectral images of wood, showing that such a technique is effective for the on-line prediction of the rendering of wood pieces after finishing. Index Terms: Hyperspectral imaging, Pushbroom imager, On-line Non-negative Matrix Factorization, Minimum volume constraint.
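
    To make the acquisition-driven scheme concrete, here is a minimal sketch of an incremental NMF that consumes one scan line at a time. The minimum-volume constraint of the paper is deliberately omitted, and the function name and update rules (multiplicative updates with accumulated sufficient statistics) are illustrative assumptions rather than the authors' exact algorithm.

        import numpy as np

        def online_nmf(line_stream, n_end, n_inner=5, eps=1e-12, seed=0):
            # line_stream yields nonnegative (n_bands, pixels_per_line)
            # arrays, one scan line at a time (pushbroom acquisition).
            rng = np.random.default_rng(seed)
            W = A = B = None
            abundances = []
            for X in line_stream:
                if W is None:
                    W = rng.random((X.shape[0], n_end)) + eps  # endmembers
                    A = np.zeros_like(W)          # running sum of X @ H.T
                    B = np.zeros((n_end, n_end))  # running sum of H @ H.T
                H = rng.random((n_end, X.shape[1])) + eps
                for _ in range(n_inner):
                    # Multiplicative update of the current line's abundances.
                    H *= (W.T @ X) / (W.T @ W @ H + eps)
                A += X @ H.T
                B += H @ H.T
                # Endmember update from all lines seen so far.
                W *= A / (W @ B + eps)
                abundances.append(H)
            return W, np.hstack(abundances)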

    Homotopy-based algorithms for $\ell_0$-regularized least-squares

    Sparse signal restoration is usually formulated as the minimization of a quadratic cost function $\|y-Ax\|_2^2$, where $A$ is a dictionary and $x$ is an unknown sparse vector. It is well known that imposing an $\ell_0$ constraint leads to an NP-hard minimization problem. The convex relaxation approach has received considerable attention, where the $\ell_0$-norm is replaced by the $\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy algorithm minimizes $\|y-Ax\|_2^2+\lambda\|x\|_1$ with respect to $x$ for a continuum of $\lambda$'s. It is inspired by the piecewise regularity of the $\ell_1$-regularization path, also referred to as the homotopy path. In this paper, we address the minimization problem $\|y-Ax\|_2^2+\lambda\|x\|_0$ for a continuum of $\lambda$'s and propose two heuristic search algorithms for $\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward greedy strategy extending the Single Best Replacement algorithm, previously proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization Path Descent is a more complex algorithm exploiting the structural properties of the $\ell_0$-regularization path, which is piecewise constant with respect to $\lambda$. Both algorithms are empirically evaluated for difficult inverse problems involving ill-conditioned dictionaries. Finally, we show that they can be easily coupled with usual methods of model order selection. Comment: 38 pages.
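
    As a point of reference for the greedy moves underlying Continuation Single Best Replacement, here is a hedged sketch of plain Single Best Replacement at a fixed $\lambda$: each iteration tries every single insertion or removal of an atom and keeps the flip that most decreases the cost. The paper's efficient implementation and its continuation over $\lambda$ are not reproduced; names and tolerances are assumptions.

        import numpy as np

        def sbr(y, A, lam, max_iter=50):
            # Greedily minimize J(S) = min_x ||y - A_S x||^2 + lam * |S|.
            n = A.shape[1]

            def cost(S):
                if not S:
                    return float(y @ y), np.zeros(0)
                As = A[:, sorted(S)]
                x, *_ = np.linalg.lstsq(As, y, rcond=None)
                r = y - As @ x
                return float(r @ r) + lam * len(S), x

            support = set()
            best = cost(support)[0]
            for _ in range(max_iter):
                trials = [support ^ {j} for j in range(n)]  # flip atom j
                costs = [cost(S)[0] for S in trials]
                k = int(np.argmin(costs))
                if costs[k] >= best:
                    break                      # no single flip improves J
                support, best = trials[k], costs[k]
            x = np.zeros(n)
            if support:
                x[sorted(support)] = cost(support)[1]
            return x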

    On the properties of the solution path of the constrained and penalized L2-L0 problems

    Technical report on the properties of the L0-constrained least-squares minimization problem and the L0-penalized least-squares minimization problem: domain of optimization, notion of solution path, and properties of the "penalized" solution path. Comment: 12 pages.
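
    The piecewise structure of the penalized path is easy to see by brute force on a toy problem: for each lambda, the penalized problem picks the support minimizing rss(S) + lambda*|S|, and this minimizer changes only at finitely many breakpoints. A small illustrative enumeration (exponential in n, so toy sizes only; the function name is an assumption):

        from itertools import combinations
        import numpy as np

        def l0_penalized_path(y, A, lambdas):
            # Support of argmin ||y - Ax||^2 + lam*||x||_0 for each lam.
            n = A.shape[1]
            rss = {(): float(y @ y)}   # support -> residual sum of squares
            for k in range(1, n + 1):
                for S in combinations(range(n), k):
                    As = A[:, list(S)]
                    x, *_ = np.linalg.lstsq(As, y, rcond=None)
                    r = y - As @ x
                    rss[S] = float(r @ r)
            # The chosen support is piecewise constant in lam.
            return [min(rss, key=lambda S: rss[S] + lam * len(S))
                    for lam in lambdas]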

    Tensor-based framework for training flexible neural networks

    Activation functions (AFs) are an important part of the design of neural networks (NNs), and their choice plays a predominant role in the performance of an NN. In this work, we are particularly interested in the estimation of flexible activation functions using tensor-based solutions, where the AFs are expressed as a weighted sum of predefined basis functions. To do so, we propose a new learning algorithm which solves a constrained coupled matrix-tensor factorization (CMTF) problem. This technique fuses the first- and zeroth-order information of the NN, where the first-order information is contained in a Jacobian tensor that follows a constrained canonical polyadic decomposition (CPD). The proposed algorithm can handle different decomposition bases. The goal of this method is to compress large pretrained NN models by replacing subnetworks, i.e., one or multiple layers of the original network, with a new flexible layer. The approach is applied to a pretrained convolutional neural network (CNN) used for character classification. Comment: 26 pages, 13 figures.
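
    The flexible-AF model itself is simple to state in code. Below is a hedged sketch of the parameterization sigma(t) = sum_k w_k phi_k(t) with the weights fit by ordinary least squares to a target activation; the paper's actual CMTF-based training (fusing the Jacobian tensor under a constrained CPD) is not reproduced, and the basis choice and names are assumptions.

        import numpy as np

        BASIS = [np.tanh, lambda t: np.maximum(t, 0.0), lambda t: t]  # phi_k

        def fit_flexible_af(t, target):
            # Least-squares fit of sigma(t) = sum_k w_k phi_k(t) to target.
            Phi = np.stack([phi(t) for phi in BASIS], axis=1)  # (N, K)
            w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
            return w

        def flexible_af(t, w):
            return sum(wk * phi(t) for wk, phi in zip(w, BASIS))

        # Example: approximate a GELU-like activation on a grid.
        t = np.linspace(-3.0, 3.0, 200)
        gelu = 0.5 * t * (1.0 + np.tanh(np.sqrt(2 / np.pi)
                                        * (t + 0.044715 * t**3)))
        w = fit_flexible_af(t, gelu)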

    On factorization of rank-one auto-correlation matrix polynomials

    This article characterizes the rank-one factorization of auto-correlation matrix polynomials. We establish a necessary and sufficient condition for uniqueness of the factorization, based on the greatest common divisor (GCD) of multiple polynomials. In the unique case, we show that the factorization can be carried out explicitly using GCDs. In the non-unique case, the number of non-trivially different factorizations is given and all solutions are enumerated.
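
    For scalar polynomials, the GCD machinery behind the factorization can be illustrated with the Euclidean algorithm. This helper is only a sketch of that one ingredient (the paper deals with matrix polynomials, and floating-point GCD computation is numerically fragile; the function name and tolerance are assumptions):

        import numpy as np

        def poly_gcd(p, q, tol=1e-10):
            # Euclidean algorithm on coefficient arrays (highest degree
            # first); returns the monic GCD.
            p = np.trim_zeros(np.asarray(p, dtype=float), 'f')
            q = np.trim_zeros(np.asarray(q, dtype=float), 'f')
            while q.size and np.max(np.abs(q)) > tol:
                _, r = np.polydiv(p, q)
                p, q = q, np.trim_zeros(r, 'f')
            return p / p[0]

        # Example: gcd((z-1)(z-2), (z-1)(z-3)) is z - 1.
        g = poly_gcd(np.poly([1.0, 2.0]), np.poly([1.0, 3.0]))  # [1., -1.]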